
    Nonanticipating estimation applied to sequential analysis and changepoint detection

    Suppose a process yields independent observations whose distributions belong to a family parameterized by \theta\in\Theta. When the process is in control, the observations are i.i.d. with a known parameter value \theta_0. When the process is out of control, the parameter changes. We apply an idea of Robbins and Siegmund [Proc. Sixth Berkeley Symp. Math. Statist. Probab. 4 (1972) 37-41] to construct a class of sequential tests and detection schemes whereby the unknown post-change parameters are estimated. This approach is especially useful in situations where the parameter space is intricate and mixture-type rules are operationally or conceptually difficult to formulate. We exemplify our approach by applying it to the problem of detecting a change in the shape parameter of a Gamma distribution, in both univariate and multivariate settings.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000183 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
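    The nonanticipating-estimation idea can be illustrated in a much simpler setting than the paper's Gamma-shape application. In the sketch below (a normal-mean test; the function name, parameter values, and choice of family are all illustrative assumptions, not the paper's procedure), the likelihood ratio at step n plugs in an estimate of the post-change mean computed only from observations 1..n-1, so the running product remains a martingale under the in-control law and Ville's inequality bounds the false-alarm probability by alpha.

```python
import math
import random

def nonanticipating_test(stream, theta0=0.0, sigma=1.0, alpha=0.01):
    """One-sided sequential test with an estimated post-change mean.

    At step n the log-likelihood-ratio increment uses theta_hat built
    only from X_1, ..., X_{n-1} (hence 'nonanticipating'), which keeps
    the running likelihood ratio a martingale under the in-control law,
    so P(ever rejecting | in control) <= alpha.
    """
    log_lr, s, n = 0.0, 0.0, 0
    bound = math.log(1.0 / alpha)
    for x in stream:
        theta_hat = theta0 if n == 0 else s / n  # past data only
        log_lr += ((x - theta0) ** 2 - (x - theta_hat) ** 2) / (2 * sigma**2)
        s += x
        n += 1
        if log_lr >= bound:
            return n  # change declared at observation n
    return None  # never crossed the boundary

# Out-of-control data: the mean has shifted from 0 to 1.
rng = random.Random(7)
out_of_control = [rng.gauss(1.0, 1.0) for _ in range(2000)]
n_detect = nonanticipating_test(out_of_control, alpha=0.01)
```

    As the running estimate settles near the true post-change mean, the log-likelihood ratio drifts upward at roughly the Kullback-Leibler rate, so detection is quick despite the post-change parameter being unknown in advance.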

    Extreme(ly) mean(ingful): Sequential formation of a quality group

    The present paper studies the limiting behavior of the average score of a sequentially selected group of items or individuals whose underlying distribution F belongs to the Gumbel domain of attraction of extreme value distributions. This class contains the Normal, Lognormal, Gamma, Weibull and many other distributions. The selection rules are the "better than average" (\beta=1) rule and the "\beta-better than average" rule, defined as follows: after the first item is selected, another item is admitted into the group if and only if its score is greater than \beta times the average score of those already selected. Denote by \bar{Y}_k the average of the first k selected items, and by T_k the time it takes to amass them. Some of the key results obtained are: under mild conditions, for the better than average rule, \bar{Y}_k less a suitably chosen function of \log k converges almost surely to a finite random variable. When 1-F(x)=e^{-[x^\alpha+h(x)]} with \alpha>0 and h(x)/x^\alpha \to 0 as x \to \infty, T_k is of approximate order k^2. When \beta>1, the asymptotic results for \bar{Y}_k are of a completely different order of magnitude. Interestingly, for a class of distributions, T_k, suitably normalized, asymptotically approaches 1: almost surely for relatively small \beta \ge 1, in probability for moderate-sized \beta, and in distribution when \beta is large.
    Comment: Published at http://dx.doi.org/10.1214/10-AAP684 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
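    The \beta-better-than-average rule is concrete enough to simulate directly. A minimal sketch (the function name is an illustrative choice; standard normal scores are used because the Normal lies in the Gumbel domain mentioned above):

```python
import random

def select_group(stream, k, beta=1.0):
    """Apply the 'beta-better than average' rule to an iterable of scores.

    The first item is always selected; thereafter an item joins the
    group iff its score exceeds beta times the current group average.
    Returns (average of the first k selected items, time T_k to amass them).
    """
    total, count, t = 0.0, 0, 0
    for x in stream:
        t += 1
        if count == 0 or x > beta * (total / count):
            total += x
            count += 1
            if count == k:
                return total / k, t
    raise ValueError("stream exhausted before k items were selected")

random.seed(0)
stream = (random.gauss(0.0, 1.0) for _ in range(10**6))
avg, t_k = select_group(stream, k=100, beta=1.0)
```

    For \beta = 1 the group average can only increase with each admission, so admissions become progressively rarer and T_k grows much faster than k, in line with the k^2-order result quoted above.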

    Sequential Change-Point Detection Procedures That are Nearly Optimal and Computationally Simple

    Sequential schemes for detecting a change in distribution often require that all of the observations be stored in memory. Lai (1995, Journal of the Royal Statistical Society, Series B 57: 613-658) proposed a class of detection schemes that enable one to retain a finite window of the most recent observations, yet promise first-order optimality. The asymptotics are such that the window size is asymptotically unbounded. We argue that what is of computational importance is not having a finite window of observations, but rather making do with a finite number of registers. We illustrate, in the context of detecting a change in the parameter of an exponential family, that one can eventually achieve even second-order asymptotic optimality using only three registers for storing information about the past. We propose a very simple procedure and show by simulation that it is highly efficient for typical applications.
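    For context on constant-memory detection, the classical CUSUM recursion already runs in a single register when the post-change parameter is known. The sketch below is that textbook one-register scheme for a known shift in a normal mean (all parameter values are illustrative); it is not the paper's three-register procedure, which additionally copes with an unknown post-change parameter.

```python
import random

def cusum_detect(stream, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Classical one-register CUSUM for a known shift in a normal mean.

    W_n = max(0, W_{n-1} + log-likelihood-ratio of the n-th observation);
    an alarm is raised at the first n with W_n >= threshold.
    """
    w = 0.0  # the single register of past information
    for n, x in enumerate(stream, start=1):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        w = max(0.0, w + llr)
        if w >= threshold:
            return n  # alarm time
    return None

random.seed(1)
obs = ([random.gauss(0.0, 1.0) for _ in range(200)]   # in control
       + [random.gauss(1.0, 1.0) for _ in range(500)])  # out of control
alarm = cusum_detect(obs)
```

    The register w stores all the scheme needs from the past; the memory cost never grows with the number of observations, which is the kind of economy the abstract is after.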

    Abstract

    We consider the first exit time of a nonnegative Harris-recurrent Markov process from the interval [0, A] as A → ∞. We provide an alternative method of proof of asymptotic exponentiality of the first exit time (suitably standardized) that does not rely on embedding in a regeneration process. We show that under certain conditions the moment generating function of a suitably standardized version of the first exit time converges to that of an Exponential(1) random variable, and we draw a connection between the standardizing constant and the quasi-stationary distribution (assuming it exists). The results are applied to the evaluation of the distribution of the run length to false alarm in change-point detection problems.
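    The asymptotic exponentiality can be glimpsed numerically. The sketch below simulates the first exit time from [0, A] of a reflected negative-drift Gaussian walk, a standard nonnegative Harris-recurrent example used here purely as an illustrative stand-in (the drift, barrier, and sample size are arbitrary choices); for an Exponential(1) limit the standardized exit times should have a coefficient of variation near 1.

```python
import random

def first_exit_time(a, drift=-0.2, sigma=1.0, rng=None):
    """First time a reflected-at-zero Gaussian random walk leaves [0, a].

    W_n = max(0, W_{n-1} + X_n) with negative-drift increments is a
    nonnegative Harris-recurrent Markov chain; its exit from [0, a] is a
    standard model for the run length to false alarm.
    """
    rng = rng or random
    w, n = 0.0, 0
    while w <= a:
        w = max(0.0, w + rng.gauss(drift, sigma))
        n += 1
    return n

rng = random.Random(42)
times = [first_exit_time(5.0, rng=rng) for _ in range(500)]
mean_t = sum(times) / len(times)
var_t = sum((t - mean_t) ** 2 for t in times) / len(times)
# An Exponential(1) variable has coefficient of variation exactly 1;
# for moderate a the simulated value is only expected to be of that order.
cv = var_t ** 0.5 / mean_t
```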